    Fast Desynchronization For Decentralized Multichannel Medium Access Control

    Distributed desynchronization algorithms are key to wireless sensor networks as they allow for medium access control in a decentralized manner. In this paper, we view desynchronization primitives as iterative methods that solve optimization problems. In particular, by formalizing a well-established desynchronization algorithm as a gradient descent method, we establish novel upper bounds on the number of iterations required to reach convergence. Moreover, by using Nesterov's accelerated gradient method, we propose a novel desynchronization primitive that provides for faster convergence to the steady state. Importantly, we propose a novel algorithm that leads to decentralized time-synchronous multichannel TDMA coordination by formulating this task as an optimization problem. Our simulations and experiments on a densely-connected IEEE 802.15.4-based wireless sensor network demonstrate that our scheme provides for faster convergence to the steady state, robustness to hidden nodes, higher network throughput and comparable power dissipation with respect to the recently standardized IEEE 802.15.4e-2012 time-synchronized channel hopping (TSCH) scheme. (Comment: to appear in IEEE Transactions on Communications.)
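    As a rough sketch of the gradient-descent view (a minimal illustration, not the authors' exact primitive; the step size alpha and momentum beta are assumed values), the phase-midpoint update and a Nesterov-accelerated variant could look like:

```python
import numpy as np

def desync_step(theta, alpha=0.5):
    """One DESYNC-style round: every node moves toward the midpoint of
    its two phase neighbours on the unit circle, which acts as a
    gradient step on the squared gaps between consecutive phases."""
    t = np.sort(np.mod(theta, 1.0))
    prev_t = np.roll(t, 1);  prev_t[0]  -= 1.0   # wrap circular neighbour
    next_t = np.roll(t, -1); next_t[-1] += 1.0
    return np.mod((1.0 - alpha) * t + alpha * 0.5 * (prev_t + next_t), 1.0)

def accelerated_desync(theta, alpha=0.5, beta=0.8, iters=100):
    """Nesterov-style acceleration: take the same midpoint update at a
    momentum-extrapolated point (beta is an illustrative choice)."""
    t_prev = t = np.sort(np.mod(theta, 1.0))
    for _ in range(iters):
        y = t + beta * (t - t_prev)          # momentum extrapolation
        t_prev, t = t, desync_step(y, alpha)
    return t
```

    For alpha in (0, 1) the midpoint update drives the phases toward uniform spacing (the desynchronized steady state); the accelerated variant reaches it in fewer rounds in this toy setting.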

    Learning-Based Symbol Level Precoding: A Memory-Efficient Unsupervised Learning Approach

    Symbol level precoding (SLP) has been proven to be an effective means of managing the interference in a multiuser downlink transmission and also enhancing the received signal power. This paper proposes an unsupervised-learning-based SLP that applies to quantized deep neural networks (DNNs). Rather than simply training a DNN in a supervised mode, our proposal unfolds a power minimization SLP formulation in an imperfect channel scenario using the interior point method (IPM) proximal 'log' barrier function. We use binary and ternary quantizations to compress the DNN's weight values. The results show significant memory savings for our proposals compared to the existing full-precision SLP-DNet, with model compression of ~21× and ~13× for the binary DNN-based SLP (RSLP-BDNet) and the ternary DNN-based SLP (RSLP-TDNets), respectively.
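    A minimal sketch of generic binary and ternary weight quantizers (the thresholding heuristic below is an assumption borrowed from common ternary-weight schemes, not the paper's exact quantizer):

```python
import numpy as np

def binarize(w):
    """Binary quantization: keep only the sign of each weight,
    rescaled by the mean magnitude so the layer output scale is
    roughly preserved."""
    scale = np.mean(np.abs(w))
    return scale * np.sign(w)

def ternarize(w, thresh_ratio=0.7):
    """Ternary quantization: weights near zero are pruned to 0,
    the rest collapse to +/- one shared scale."""
    delta = thresh_ratio * np.mean(np.abs(w))   # assumed threshold rule
    mask = np.abs(w) > delta
    scale = np.abs(w[mask]).mean() if mask.any() else 0.0
    return scale * np.sign(w) * mask
```

    Storing only the sign (and a sparsity mask for the ternary case) plus one scale per layer is what yields the large memory savings relative to full-precision weights.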

    Complexity-Scalable Neural Network Based MIMO Detection With Learnable Weight Scaling

    This paper introduces a framework for systematic complexity scaling of deep neural network (DNN) based MIMO detectors. The model uses a fraction of the DNN inputs by scaling their values through weights that follow monotonically non-increasing functions. This allows for weight scaling across and within the different DNN layers in order to achieve scalable complexity-accuracy results. To reduce complexity further, we introduce a regularization constraint on the layer weights such that, at inference, parts (or the entirety) of network layers can be removed with minimal impact on the detection accuracy. We also introduce trainable weight-scaling functions for increased robustness to changes in the activation patterns and a further improvement in the detection accuracy at the same inference complexity. Numerical results show that our approach is 10- and 100-fold less complex than classical approaches based on semi-definite relaxation and maximum-likelihood (ML) detection, respectively.
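    To illustrate the idea of gating inputs with a monotonically non-increasing profile (the exponential profile and the keep_frac parameter below are assumptions for illustration; the paper learns such functions rather than fixing them), consider:

```python
import numpy as np

def scaling_profile(n_inputs, keep_frac=1.0):
    """Monotonically non-increasing per-input scaling: inputs up to
    keep_frac of the range keep full weight, later ones decay smoothly
    toward zero, so truncating them at inference perturbs the output
    only mildly."""
    x = np.linspace(0.0, 1.0, n_inputs)
    return np.exp(-5.0 * np.maximum(x - keep_frac, 0.0))

def scaled_dense_relu(x, W, b, keep_frac=1.0):
    """A dense + ReLU layer whose inputs are attenuated by the profile;
    lowering keep_frac trades accuracy for complexity."""
    return np.maximum(W @ (x * scaling_profile(x.size, keep_frac)) + b, 0.0)
```

    Because the profile never increases, the least-scaled inputs (and the weights feeding on them) are the natural candidates to drop when a lower-complexity operating point is needed.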

    An Unsupervised Learning-Based Approach for Symbol-Level-Precoding

    This paper proposes an unsupervised learning-based precoding framework that trains deep neural networks (DNNs) with no target labels by unfolding an interior point method (IPM) proximal 'log' barrier function. The proximal 'log' barrier function is derived from the strict power minimization formulation subject to the signal-to-interference-plus-noise ratio (SINR) constraint. The proposed scheme exploits the known interference via symbol-level precoding (SLP) to minimize the transmit power, and is named the strict Symbol-Level-Precoding deep network (SLP-SDNet). The results show that SLP-SDNet outperforms the conventional block-level precoding (BLP) scheme while achieving near-optimal performance faster than the SLP optimization-based approach.
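    Schematically, an IPM 'log' barrier replaces the hard SINR constraints with a smooth penalty; a generic form (the notation below is illustrative, not the paper's) is

```latex
% Power minimization with SINR constraints g_i(w) >= 0, softened by a
% log barrier with parameter t > 0 (larger t = tighter approximation):
\min_{\mathbf{w}}\; \|\mathbf{w}\|_2^2
  \;-\; \frac{1}{t}\sum_{i=1}^{K}\log\!\big(g_i(\mathbf{w})\big),
\qquad g_i(\mathbf{w}) \;=\; \mathrm{SINR}_i(\mathbf{w}) - \gamma_i .
```

    Unfolding then maps each barrier iteration to a network layer whose parameters are trained on this objective directly, which is why no target labels are needed.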

    A Memory-Efficient Learning Framework for Symbol Level Precoding with Quantized NN Weights

    This paper proposes a memory-efficient deep neural network (DNN) framework for symbol-level precoding (SLP). We focus on a DNN with realistic finite-precision weights and adopt an unsupervised deep learning (DL) based SLP model (SLP-DNet). We apply a stochastic quantization (SQ) technique to obtain its corresponding quantized version, called SLP-SQDNet. The proposed scheme offers a scalable performance-versus-memory trade-off by quantizing a scalable percentage of the DNN weights, and we explore binary and ternary quantizations. Our results show that while SLP-DNet provides near-optimal performance, its quantized versions through SQ yield ~3.46× and ~2.64× model compression for the binary-based and ternary-based SLP-SQDNets, respectively. We also find that our proposals offer ~20× and ~10× computational complexity reductions compared to the SLP optimization-based approach and SLP-DNet, respectively.
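    A minimal sketch of unbiased stochastic rounding applied to a chosen fraction of the weights (the level set, fraction, and random selection rule are assumptions for illustration; e.g. levels = [-1, 0, 1] gives a ternary quantizer):

```python
import numpy as np

def stochastic_quantize(w, levels, frac=0.5, seed=0):
    """Stochastically round a fraction `frac` of the weights to the two
    nearest quantization levels; rounding up with probability
    proportional to the distance keeps the quantizer unbiased."""
    rng = np.random.default_rng(seed)
    levels = np.asarray(levels, dtype=float)
    out = w.copy().reshape(-1)
    idx = rng.choice(out.size, size=int(frac * out.size), replace=False)
    for i in idx:
        lo = levels[levels <= out[i]].max(initial=levels.min())
        hi = levels[levels >= out[i]].min(initial=levels.max())
        p = 0.0 if hi == lo else (out[i] - lo) / (hi - lo)
        out[i] = hi if rng.random() < p else lo   # E[out[i]] = w[i]
    return out.reshape(w.shape)

# e.g. ternary quantization of 30% of the weights:
# wq = stochastic_quantize(w, [-1.0, 0.0, 1.0], frac=0.3)
```

    Varying `frac` is what produces the scalable performance-versus-memory trade-off described above: the larger the quantized percentage, the greater the compression and the larger the deviation from the full-precision model.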

    Modeling Camera Effects to Improve Visual Learning from Synthetic Data

    Recent work has focused on generating synthetic imagery to increase the size and variability of training data for learning visual tasks in urban scenes. This includes increasing the occurrence of occlusions or varying environmental and weather effects. However, few have addressed modeling variation in the sensor domain. Sensor effects can degrade real images, limiting the generalizability of networks that are trained on synthetic data and tested in real environments. This paper proposes an efficient, automatic, physically-based augmentation pipeline to vary sensor effects (chromatic aberration, blur, exposure, noise, and color cast) for synthetic imagery. In particular, this paper illustrates that augmenting synthetic training datasets with the proposed pipeline reduces the domain gap between the synthetic and real domains for the task of object detection in urban driving scenes.
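    A toy version of such a pipeline (parameter ranges are illustrative assumptions; the paper's pipeline is calibrated and physically based) might look like:

```python
import numpy as np

def augment_sensor_effects(img, seed=None):
    """Apply toy sensor effects to an HxWx3 float image in [0, 1]."""
    rng = np.random.default_rng(seed)
    img = img.copy()
    # chromatic aberration: shift the red channel sideways by a pixel or two
    img[..., 0] = np.roll(img[..., 0], rng.integers(-2, 3), axis=1)
    # blur: cheap 3x3 box filter built from shifted copies
    img = sum(np.roll(np.roll(img, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    # exposure: gamma-style brightness change
    img = img ** rng.uniform(0.7, 1.4)
    # noise: additive Gaussian sensor noise
    img = img + rng.normal(0.0, 0.02, img.shape)
    # color cast: per-channel gain imbalance
    img = img * rng.uniform(0.9, 1.1, size=3)
    return np.clip(img, 0.0, 1.0)
```

    Randomizing these effects per training sample exposes the detector to the sensor-domain variation that real cameras introduce but clean renderings lack.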

    Studies of Shock Wave Interactions with Homogeneous and Isotropic Turbulence

    A nearly homogeneous, nearly isotropic compressible turbulent flow interacting with a normal shock wave has been studied experimentally in a large shock tube facility. Spatial resolution of the order of 8 Kolmogorov viscous length scales was achieved in the measurements of turbulence. A variety of turbulence-generating grids provided a wide range of turbulence scales. Integral length scales were found to decrease substantially through the interaction with the shock wave in all investigated cases, with flow Mach numbers ranging from 0.3 to 0.7 and shock Mach numbers from 1.2 to 1.6. The outcome of the interaction depends strongly on the state of compressibility of the incoming turbulence. The length scales in the lateral direction are amplified at small Mach numbers and attenuated at large Mach numbers. Even at large Mach numbers, amplification of lateral length scales has been observed in the case of fine grids. In addition to the interaction with the shock, the present work has documented substantial compressibility effects in the incoming homogeneous and isotropic turbulent flow. The decay of Mach number fluctuations was found to follow a power law similar to that describing the decay of incompressible isotropic turbulence. It was found that the decay coefficient and the decay exponent decrease with increasing Mach number, while the virtual origin increases with increasing Mach number. A mechanism possibly responsible for these effects appears to be the inherently low growth rate of compressible shear layers emanating from the cylindrical rods of the grid.
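    For reference, the classical power-law decay of grid-generated incompressible isotropic turbulence, which the Mach-number fluctuations are said to mirror, is commonly written (standard notation, not necessarily the paper's) as

```latex
% A: decay coefficient, n: decay exponent, x_0: virtual origin,
% M: grid mesh size, U: mean velocity, u': velocity fluctuation
\frac{\overline{u'^2}}{U^2} \;=\; A\left(\frac{x - x_0}{M}\right)^{-n}
```

    In this notation, the abstract's finding is that the coefficient A and the exponent n decrease, while the virtual origin x_0 increases, as the Mach number grows.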

    Simultaneous measurement of the muon neutrino charged-current cross section on oxygen and carbon without pions in the final state at T2K

    Authors: K. Abe et al. (T2K Collaboration)
    This paper reports the first simultaneous measurement of the double-differential muon neutrino charged-current cross section on oxygen and carbon without pions in the final state, as a function of the outgoing muon kinematics, made at the ND280 off-axis near detector of the T2K experiment. The ratio of the oxygen and carbon cross sections is also provided to help validate various models' ability to extrapolate between carbon and oxygen nuclear targets, as is required in T2K oscillation analyses. The data are taken using a neutrino beam with an energy spectrum peaked at 0.6 GeV. The extracted measurement is compared with the predictions of different Monte Carlo neutrino-nucleus interaction event generators, showing particular model separation for very forward-going muons. Overall, of the models tested, the result is best described using local Fermi gas descriptions of the nuclear ground state with random phase approximation (RPA) suppression.

    jClust: a clustering and visualization toolbox

    jClust is a user-friendly application which provides access to a set of widely used clustering and clique-finding algorithms. The toolbox allows a range of filtering procedures to be applied and is combined with an advanced implementation of the Medusa interactive visualization module. The implemented algorithms are k-means, affinity propagation, Bron–Kerbosch, MULIC, the restricted neighborhood search cluster algorithm, Markov clustering and spectral clustering, while the supported filtering procedures are haircut, outside–inside, best-neighbors and density-control operations. The combination of a simple input file format and a set of clustering and filtering algorithms, linked together with the visualization tool, provides a powerful environment for data analysis and information extraction.
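    As a hypothetical Python rendering of one of the listed filtering steps (the min_degree threshold and exact trimming rule are assumptions; jClust's own implementation may differ), a haircut-style filter trims cluster members that are weakly connected within the cluster:

```python
import networkx as nx

def haircut(graph, cluster, min_degree=2):
    """Haircut filtering sketch: repeatedly remove cluster members whose
    degree inside the cluster falls below min_degree, until stable."""
    members = set(cluster)
    changed = True
    while changed:
        sub = graph.subgraph(members)
        weak = {n for n in members if sub.degree(n) < min_degree}
        changed = bool(weak)
        members -= weak
    return members
```

    Applied after clustering, such a trim keeps only the densely interconnected core of each cluster, which is the usual purpose of a haircut operation.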